
    Pencil Puzzles for Introductory Computer Science: An Experience- and Gender-Neutral Context

    The teaching of introductory computer science can benefit from the use of real-world context to ground the abstract programming concepts. We present the domain of pencil puzzles as a context for a variety of introductory CS topics. Pencil puzzles are puzzles typically found in newspapers and magazines, intended to be solved by the reader through deduction, using only a pencil. A well-known example of a pencil puzzle is Sudoku, which has been widely used as a typical backtracking assignment. However, there are dozens of other well-tried and well-liked pencil puzzles available that naturally induce computational thinking and can be used as context for many CS topics such as arrays, loops, recursion, GUIs, inheritance and graph traversal. Our contributions in this paper are two-fold. First, we present a few pencil puzzles, map them to the introductory CS concepts that each puzzle can target in an assignment, and point the reader to other puzzle repositories that can lead to an almost limitless set of introductory CS assignments. Second, we formally evaluated the effectiveness of such assignments used at our institution over the past three years. Students reported that they learned the material, believe they can tackle similar problems, and improved their coding skills. The assignments also led to a significantly higher proportion of unsolicited statements of enjoyment, as well as of metacognition, when compared to a traditional assignment for the same topic. Lastly, for all but one assignment, students' gender and prior programming experience were independent of their grades and of their perceptions of and reflections on the assignment.
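
    Since the abstract singles out Sudoku as the archetypal backtracking assignment, the following minimal Python sketch shows what such an assignment boils down to. It is illustrative only: the function names (is_valid, solve) and the structure are ours, not the paper's.

        # Minimal backtracking Sudoku solver (illustrative sketch, not code
        # from the paper). Empty cells are 0; solve() mutates the 9x9 grid
        # in place and returns True once it is completely filled.

        def is_valid(grid, row, col, digit):
            """Check the row, column and 3x3 box constraints for digit."""
            if digit in grid[row]:
                return False
            if any(grid[r][col] == digit for r in range(9)):
                return False
            br, bc = 3 * (row // 3), 3 * (col // 3)
            return all(grid[r][c] != digit
                       for r in range(br, br + 3)
                       for c in range(bc, bc + 3))

        def solve(grid):
            """Depth-first search: place a digit, recurse, undo on failure."""
            for row in range(9):
                for col in range(9):
                    if grid[row][col] == 0:
                        for digit in range(1, 10):
                            if is_valid(grid, row, col, digit):
                                grid[row][col] = digit
                                if solve(grid):
                                    return True
                                grid[row][col] = 0  # undo, try next digit
                        return False  # no digit fits here: backtrack
            return True  # no empty cell left: solved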

    Computing and counting longest paths on circular-arc graphs in polynomial time

    The longest path problem asks for a path with the largest number of vertices in a given graph. The first polynomial-time algorithm (with running time O(n^4)) has recently been developed for interval graphs. Even though interval and circular-arc graphs look superficially similar, they differ substantially, as circular-arc graphs are not perfect. In this paper, we prove that for every path P of a circular-arc graph G, we can appropriately “cut” the circle, such that the obtained (not induced) interval subgraph G′ of G admits a path P′ on the same vertices as P. This non-trivial result is of independent interest, as it suggests a generic reduction of a number of path problems on circular-arc graphs to the case of interval graphs with a multiplicative linear-time overhead of O(n). As an application of this reduction, we present the first polynomial algorithm for the longest path problem on circular-arc graphs, which turns out to have the same running time O(n^4) as the one on interval graphs, as we manage to get rid of the linear overhead of the reduction. This algorithm computes in the same time an n-approximation of the number of different vertex sets that provide a longest path; in the case where G is an interval graph, we compute the exact number. Moreover, our algorithm can be directly extended with the same running time to the case where every vertex has an arbitrary positive weight.
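
    To make the reduction concrete, here is a schematic Python sketch of its outer loop. The interval_longest_path subroutine is a hypothetical placeholder for any interval-graph longest-path algorithm, and the one-sided truncation below is a simplification of the paper's more careful cutting rule.

        # Schematic outer loop of the cut-and-reduce strategy: try O(n) cut
        # points, solve each resulting interval instance, keep the best
        # path. Intervals are returned in input order, so index i still
        # names vertex i. Illustrative sketch only.

        def unroll(arcs, t, L):
            """Cut the circle of circumference L at point t.

            Each arc is a pair (s, e) with 0 <= s, e < L, wrapping when
            e < s. After shifting so the cut sits at 0, an arc that still
            covers the cut is truncated there, yielding the intervals of a
            (not induced) interval subgraph.
            """
            intervals = []
            for s, e in arcs:
                s, e = (s - t) % L, (e - t) % L
                if e < s:  # arc covers the cut: keep only one side of it
                    s = 0.0
                intervals.append((s, e))
            return intervals

        def longest_path_circular_arc(arcs, L, interval_longest_path):
            """Try every arc endpoint as a cut point; return the best path."""
            best = []
            for t in {p for arc in arcs for p in arc}:
                path = interval_longest_path(unroll(arcs, t, L))
                best = max(best, path, key=len)
            return best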

    Fast sampling via spectral independence beyond bounded-degree graphs

    Spectral independence is a recently developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal O(n log n) sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using L^p-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of L^p-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the L^p-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph G(n, d/n), where the previously known algorithms run in time n^{O(log d)} or applied only to large d. We refine these algorithmic bounds significantly, and develop fast n^{1+o(1)} algorithms based on Glauber dynamics that apply to all d, throughout the uniqueness regime.
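
    For concreteness, here is a minimal Python sketch of the Glauber dynamics whose convergence the spectral-independence framework bounds, specialised to independent sets (the hardcore model). This is the standard single-site heat-bath update, not code from the paper; the abstract's O(n log n) and n^{1+o(1)} bounds concern how many such steps are needed.

        import random

        def glauber_step(graph, in_set, lam=1.0):
            """One heat-bath update at a uniformly random vertex.

            graph: dict mapping each vertex to its list of neighbours.
            in_set: current independent set (mutated in place).
            lam: fugacity; lam = 1 weights all independent sets equally.
            """
            v = random.choice(list(graph))
            in_set.discard(v)
            # v may re-enter only if no neighbour is occupied, and then
            # does so with probability lam / (1 + lam).
            if all(u not in in_set for u in graph[v]):
                if random.random() < lam / (1.0 + lam):
                    in_set.add(v)

        def sample(graph, steps, lam=1.0):
            """Run the chain from the empty set for a given number of steps."""
            in_set = set()
            for _ in range(steps):
                glauber_step(graph, in_set, lam)
            return in_set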

    Graph model selection using maximum likelihood

    In recent years, there has been a proliferation of theoretical graph models, e.g., preferential attachment and small-world models, motivated by real-world graphs such as the Internet topology. To address the natural question of which model is best for a particular data set, we propose a model selection criterion for graph models. Since each model is in fact a probability distribution over graphs, we suggest using Maximum Likelihood to compare graph models and select their parameters. Interestingly, for the case of graph models, computing likelihoods is a difficult algorithmic task. However, we design and implement MCMC algorithms for computing the maximum likelihood for four popular models: a power-law random graph model, a preferential attachment model, a small-world model, and a uniform random graph model. We hope that this novel use of maximum likelihood will make comparisons between graph models more objective.
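
    To illustrate what scoring a model by likelihood means, consider the simplest of the four models. Reading the uniform random graph model as G(n, p) (an assumption on our part), its maximum likelihood for an observed graph has a closed form; the other three models are exactly where the paper's MCMC algorithms are needed.

        from math import comb, log

        def gnp_max_log_likelihood(n, m):
            """Maximised log-likelihood of an n-vertex, m-edge graph under G(n, p).

            Each labelled graph has probability p^m * (1 - p)^(N - m),
            where N = C(n, 2); this is maximised at p = m / N.
            """
            N = comb(n, 2)
            if m == 0 or m == N:
                return 0.0  # degenerate MLE: the data gets probability 1
            p = m / N
            return m * log(p) + (N - m) * log(1.0 - p)

    Model selection then amounts to comparing such log-likelihood scores, each maximised over its own model's parameters, on the same observed graph.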

    Approximation via Correlation Decay When Strong Spatial Mixing Fails

    Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of anti-ferromagnetic two-spin models. Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing. This, roughly, means that one uses worst-case analysis of the recursive procedure that creates the sub-instances. In this paper, we develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and we amortise against certain “bad” instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain an FPTAS even when strong spatial mixing fails. We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper bound Δ and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k ≥ 2 and Δ ≤ 5 (lack of strong spatial mixing was the obstacle preventing this algorithm from being generalised to Δ = 6). Our technique gives a tight result for Δ = 6, showing that there is an FPTAS for k ≥ 3 and Δ ≤ 6. The best previously-known approximation scheme for Δ = 6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and Karpinski, which only works for k ≥ 8. Our technique also applies for larger values of k, giving an FPTAS for k ≥ 1.66Δ. This bound is not as strong as existing randomised results, for technical reasons that are discussed in the paper. Nevertheless, it gives the first deterministic approximation schemes in this regime. We further demonstrate that in the hypergraph independent set model, approximating the partition function is NP-hard even within the uniqueness regime.
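
    The recursion at the heart of such correlation-decay algorithms can be sketched in a few lines of Python. The version below is for independent sets in ordinary graphs with λ = 1; it is exact on trees, and the hypergraph computation trees and the amortised analysis that are the paper's actual contribution are not reproduced here.

        def occupancy_ratio(graph, v, depth, banned=frozenset()):
            """Truncated recursion for R_v = Z(v in) / Z(v out).

            graph: dict mapping each vertex to its list of neighbours.
            The recursion descends along self-avoiding paths; truncating
            at a fixed depth is exactly what correlation decay licenses,
            since the truncation error then shrinks with the depth.
            """
            if depth == 0:
                return 1.0  # arbitrary boundary value
            r = 1.0
            for u in graph[v]:
                if u not in banned:
                    r /= 1.0 + occupancy_ratio(graph, u, depth - 1, banned | {v})
            return r

        # The marginal P(v occupied) is then r / (1 + r); telescoping such
        # marginals yields the approximation of the partition function.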

    Approximation via Correlation Decay when Strong Spatial Mixing Fails

    Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of antiferromagnetic 2-spin models. Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing. This, roughly, means that one uses worst-case analysis of the recursive procedure that creates the subinstances. In this paper, we develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and we amortize against certain “bad” instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain a fully polynomial-time approximation scheme (FPTAS) even when strong spatial mixing fails. We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper bound Δ and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k ≥ 2 and Δ ≤ 5 (lack of strong spatial mixing was the obstacle preventing this algorithm from being generalized to Δ = 6). Our technique gives a tight result for Δ = 6, showing that there is an FPTAS for k ≥ 3 and Δ ≤ 6. The best previously known approximation scheme for Δ = 6 is the Markov-chain simulation based fully polynomial-time randomized approximation scheme (FPRAS) of Bordewich, Dyer, and Karpinski, which only works for k ≥ 8. Our technique also applies for larger values of k, giving an FPTAS for k ≥ Δ. This bound is not substantially stronger than existing randomized results in the literature. Nevertheless, it gives the first deterministic approximation scheme in this regime. Moreover, unlike existing results, it leads to an FPTAS for counting dominating sets in regular graphs with sufficiently large degree. We further demonstrate that in the hypergraph independent set model, approximating the partition function is NP-hard even within the uniqueness regime. Also, approximately counting dominating sets of bounded-degree graphs (without the regularity restriction) is NP-hard.

    A Uniform Methodology for Ranking Internet Topology Models

    Abstract — In recent years, there has been a proliferation of theoretical graph models, e.g., preferential attachment, motivated by real-world graphs such as the Web or Internet topology. Typically these models are designed to mimic particular properties observed in the graphs, such as power-law degree distribution or the small-world phenomenon. The mainstream approach to comparing models for these graphs has been somewhat subjective and very application dependent. Comparisons are often based on specific graph properties, without adequate justification for prioritizing some properties over others. We propose to use the Maximum Likelihood Estimation (MLE) principle to compare graph models: models are scored by the probability with which they generate the real data. Our methodology has several advantages. It is uniform, in that its definition does not presuppose any information about the data or the models. It is unambiguous, in that it yields a clearly defined score for each model, and thus an ordering of models. Moreover, it can be used to determine the best values of the parameters for a given model. We demonstrate the feasibility of the approach by designing and implementing algorithms computing the probability for four natural models: a power-law random graph model, a preferential attachment model, a small-world model, and a uniform random graph model. We tested our algorithms on three different snapshots of the AS-level Internet topology. We found that the preferential attachment model performed the best, closely followed by the power-law model, with the other two models lagging behind. An interesting aspect of the findings is the fact that the optimal parameters for the power-law model have not changed significantly over time, even though the size of the data has grown by an order of magnitude.